Goto


Flexible Models for Microclustering with Application to Entity Resolution

Neural Information Processing Systems

Most generative models for clustering implicitly assume that the number of data points in each cluster grows linearly with the total number of data points. Finite mixture models, Dirichlet process mixture models, and Pitman--Yor process mixture models make this assumption, as do all other infinitely exchangeable clustering models. However, for some applications, this assumption is inappropriate. For example, when performing entity resolution, the size of each cluster should be unrelated to the size of the data set, and each cluster should contain a negligible fraction of the total number of data points. These applications require models that yield clusters whose sizes grow sublinearly with the size of the data set. We address this requirement by defining the microclustering property and introducing a new class of models that can exhibit this property. We compare models within this class to two commonly used clustering models using four entity-resolution data sets.
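Stated informally (a paraphrase of the paper's definition, writing $M_N$ for the size of the largest cluster in a random partition of $N$ data points), the microclustering property requires

$$\frac{M_N}{N} \to 0 \quad \text{in probability as } N \to \infty,$$

i.e. even the largest cluster occupies a vanishing fraction of the data, in contrast to infinitely exchangeable clustering models, whose cluster sizes grow linearly in $N$.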


Global Convergence of Langevin Dynamics Based Algorithms for Nonconvex Optimization

Neural Information Processing Systems

We present a unified framework to analyze the global convergence of Langevin dynamics based algorithms for nonconvex finite-sum optimization with $n$ component functions. At the core of our analysis is a direct analysis of the ergodicity of the numerical approximations to Langevin dynamics, which leads to faster convergence rates. Specifically, we show that gradient Langevin dynamics (GLD) and stochastic gradient Langevin dynamics (SGLD) converge to the \textit{almost minimizer}\footnote{Following \citet{raginsky2017non}, an almost minimizer is defined to be a point which is within the ball of the global minimizer with radius $O(d\log(\beta+1)/\beta)$, where $d$ is the problem dimension and $\beta$ is the inverse temperature parameter.}
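For intuition, the two algorithms named in the abstract admit a very short sketch. The NumPy pseudocode below is an illustration, not the paper's implementation; the step size eta, inverse temperature beta, and gradient callables are placeholders.

import numpy as np

def gld_step(x, grad_f, eta, beta, rng):
    # Gradient Langevin dynamics: a gradient step plus Gaussian noise scaled by sqrt(2*eta/beta).
    xi = rng.standard_normal(x.shape)
    return x - eta * grad_f(x) + np.sqrt(2.0 * eta / beta) * xi

def sgld_step(x, grad_fi, batch, eta, beta, rng):
    # Stochastic gradient Langevin dynamics: the full gradient is replaced by an
    # unbiased minibatch average over the component functions indexed by `batch`.
    g = np.mean([grad_fi(x, i) for i in batch], axis=0)
    xi = rng.standard_normal(x.shape)
    return x - eta * g + np.sqrt(2.0 * eta / beta) * xi

Iterating either update with a small step size and a large inverse temperature drives the iterates toward low-value regions of the objective; the paper's contribution is to quantify how fast this happens for nonconvex finite-sum objectives.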


A Supplementary Analysis

Neural Information Processing Systems

To evaluate TSLD's efficiency, we detail training speeds and GPU memory consumption for various configurations. Our analysis of confidence disparity in token predictions, detailed in Section 4.2, extends beyond a single model; in fact, this observed trend is consistently present across various GLM models. These errors are visualized using a heatmap plot (Fig. A2, top). For the OPT-6.7B model, quantization error is measured for the 5th and 15th layers; for the LLaMA-7B model, quantization errors are depicted for input sequence lengths of 128 and 512. From left to right: OPT-6.7B, LLaMA-7B, and LLaMA-2-7B. However, as we delve deeper into the layers of OPT-6.7B or introduce longer input sequences to LLaMA-7B, this phenomenon becomes less pronounced.
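A per-layer quantization-error heatmap of the kind described above can be reproduced in spirit with a short script. The sketch below is purely illustrative: the quantize helper, the bit width, and the randomly generated activations are hypothetical stand-ins for whatever quantizer and captured model activations the analysis actually uses.

import numpy as np
import matplotlib.pyplot as plt

def quantize(x, n_bits=4):
    # Hypothetical uniform symmetric quantizer, for illustration only.
    scale = np.abs(x).max() / (2 ** (n_bits - 1) - 1)
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
# Placeholder activations: {layer index -> array of shape (sequence length, hidden dim)}.
activations = {layer: rng.standard_normal((128, 64)) for layer in range(8)}

# Mean absolute quantization error per (layer, token position), visualized as a heatmap.
errors = np.stack([np.abs(a - quantize(a)).mean(axis=1)
                   for _, a in sorted(activations.items())])
plt.imshow(errors, aspect="auto", cmap="viridis")
plt.xlabel("token position")
plt.ylabel("layer")
plt.colorbar(label="mean |quantization error|")
plt.show()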